
    Bayesian Inference for partially observed SDEs Driven by Fractional Brownian Motion

    We consider continuous-time diffusion models driven by fractional Brownian motion. Observations are assumed to possess a non-trivial likelihood given the latent path. Due to the non-Markovianity and high dimensionality of the latent paths, estimating posterior expectations is computationally challenging. We present a reparameterization framework based on the Davies and Harte method for sampling stationary Gaussian processes, and use this framework to construct a Markov chain Monte Carlo algorithm that allows computationally efficient Bayesian inference. The algorithm is based on a version of hybrid Monte Carlo that delivers increased efficiency when applied to the high-dimensional latent variables arising in this context. We specify the methodology for a stochastic volatility model that allows for memory in the volatility increments through a fractional specification. The methodology is illustrated on simulated data and on the S&P500/VIX time series and is shown to be effective. Contrary to the long-range dependence (Hurst parameter larger than 1/2) often assumed for such models in the literature, the posterior distribution favours values smaller than 1/2, pointing towards medium-range dependence.
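    The Davies and Harte construction mentioned in the abstract can be sketched with circulant embedding and the FFT. The following is a minimal illustration (not the paper's code) using the standard autocovariance of fractional Gaussian noise; the function name and the simplified complex-Gaussian sampling trick are this sketch's choices:

    ```python
    import numpy as np

    def fgn_davies_harte(n, H, rng=None):
        """Sample n steps of fractional Gaussian noise with Hurst index H
        via circulant embedding (Davies-Harte); a minimal sketch."""
        rng = np.random.default_rng(rng)
        k = np.arange(n + 1)
        # Autocovariance of unit-step fGN: g(k) = (|k+1|^2H - 2|k|^2H + |k-1|^2H)/2
        g = 0.5 * (np.abs(k + 1) ** (2 * H) - 2 * np.abs(k) ** (2 * H)
                   + np.abs(k - 1) ** (2 * H))
        # First row of the circulant embedding of the Toeplitz covariance, length m = 2n
        c = np.concatenate([g, g[-2:0:-1]])
        m = 2 * n
        lam = np.fft.fft(c).real          # circulant eigenvalues
        if np.any(lam < 0):
            raise ValueError("embedding is not nonnegative definite")
        # Complex-Gaussian trick: the real part of the transformed noise has the
        # target covariance on its first n coordinates
        xi = rng.standard_normal(m) + 1j * rng.standard_normal(m)
        return np.fft.fft(np.sqrt(lam / m) * xi)[:n].real

    # A fractional Brownian motion path on [0, 1] is the rescaled cumulative sum
    n, H = 1024, 0.3
    fbm_path = np.cumsum(fgn_davies_harte(n, H, rng=0)) * (1.0 / n) ** H
    ```

    The cost is O(n log n) per draw, and the map from the i.i.d. Gaussians `xi` to the path is the reparameterization that makes gradient-based samplers such as hybrid Monte Carlo practical on the path space.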

    Bayesian inference for indirectly observed stochastic processes, applications to epidemic modelling

    Stochastic processes are mathematical objects that offer a probabilistic representation of how some quantities evolve in time. In this thesis we focus on estimating the trajectory and parameters of dynamical systems in cases where only indirect observations of the driving stochastic process are available. We first explored ways to use weekly recorded numbers of Influenza cases to capture how the frequency and nature of contacts with infected individuals evolved in time. The latter was modelled with diffusions and can be used to quantify the impact of varying drivers of epidemics such as holidays, climate, or prevention interventions. Following this idea, we estimated how the frequency of condom use evolved during the intervention of the Gates Foundation against HIV in India. In this setting, the available estimates of the proportion of individuals infected with HIV were not only indirect but also very scarce observations, leading to specific difficulties. Finally, we developed a methodology for fractional Brownian motion (fBM), here a fractional stochastic volatility model, indirectly observed through market prices. The intractability of the likelihood function, which requires augmenting the parameter space with the diffusion path, is ubiquitous in this thesis. We aimed for inference methods robust to refinements of the time discretisation, which are necessary to ensure the accuracy of Euler schemes. The particle marginal Metropolis–Hastings (PMMH) algorithm exhibits this mesh-free property. We propose the use of fast approximate filters as a pre-exploration tool to estimate the shape of the target density, for a quicker and more robust adaptation phase of the asymptotically exact algorithm. The fBM problem could not be treated with the PMMH and required an alternative methodology based on reparameterisation and advanced Hamiltonian Monte Carlo techniques on the diffusion path space, which would also be applicable in the Markovian setting.
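    The PMMH algorithm mentioned above replaces the intractable likelihood in a Metropolis–Hastings acceptance ratio with an unbiased particle-filter estimate. A minimal bootstrap-filter sketch of that estimator follows; the function names and the generic model interface (`sample_x0`, `propagate`, `log_obs_dens`) are illustrative assumptions, not the thesis code:

    ```python
    import numpy as np

    def bootstrap_pf_loglik(y, sample_x0, propagate, log_obs_dens,
                            n_particles=500, rng=None):
        """Unbiased estimate of log p(y_{1:T} | theta) from a bootstrap
        particle filter -- the quantity PMMH plugs into its acceptance
        ratio in place of the exact likelihood."""
        rng = np.random.default_rng(rng)
        x = sample_x0(n_particles, rng)          # initial particle cloud
        ll = 0.0
        for yt in y:
            x = propagate(x, rng)                # sample from the transition
            logw = log_obs_dens(yt, x)           # weight by observation density
            m = logw.max()
            w = np.exp(logw - m)
            ll += m + np.log(w.mean())           # stable log of the mean weight
            idx = rng.choice(len(x), size=len(x), p=w / w.sum())
            x = x[idx]                           # multinomial resampling
        return ll
    ```

    Refining the time discretisation of the latent diffusion only changes what `propagate` does internally, not the structure of the estimator; this is the mesh-free property the abstract refers to.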

    Nothing but the truth: Consistency and efficiency of the list experiment method for the measurement of sensitive health behaviours

    Rationale: Social desirability bias, the tendency to under-report socially undesirable health behaviours, significantly distorts information on sensitive behaviours gained from self-reports and prevents accurate estimation of the prevalence of those behaviours. We contribute to a growing body of literature that seeks to assess the performance of the list experiment method in improving estimation of these sensitive health behaviours. Method: We use a double-list experiment design, in which respondents serve as the treatment group for one list and as the control group for the other, to estimate the prevalence of two sensitive health behaviours in different settings: condom use among 500 female sex workers in urban Senegal and physical intimate partner violence among 1700 partnered women in rural Burkina Faso. First, to assess whether the list experiment improves the accuracy of prevalence estimates for sensitive behaviours, we compare the prevalence rates estimated from self-reports with those elicited through the list experiment. Second, we test whether the prevalence rates obtained from the two lists of the double-list design are similar, and we estimate the reduction in standard errors achieved with this design. Finally, we compare the results with those obtained through another indirect elicitation method, the polling vote method. Results: We show that the list experiment method reduces misreporting by 17 percentage points for condom use and 16–20 percentage points for intimate partner violence. Exploiting the double-list design, we also demonstrate that the prevalence estimates obtained from the two lists are identical in the full sample and across sub-groups, and that the double-list design reduces standard errors by approximately 40% compared with the simple list design. Finally, we show that the list experiment method leads to a higher estimated prevalence of sensitive behaviours than the polling vote method. Conclusion: The study suggests that list experiments are an effective method for improving estimation of the prevalence of sensitive health behaviours.
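    The estimator behind these comparisons is a difference in mean item counts between respondents who saw the list with the sensitive item and those who saw it without; the double-list design averages two such estimates. A minimal sketch on hypothetical data (function names, the synthetic-data setup, and the assumption that the two list estimates are uncorrelated are simplifications of this sketch, not the study's code):

    ```python
    import numpy as np

    def list_prevalence(counts, treated):
        """Difference-in-means estimator for one list experiment: treated
        respondents saw the sensitive item appended to their list."""
        counts = np.asarray(counts, dtype=float)
        treated = np.asarray(treated, dtype=bool)
        t, c = counts[treated], counts[~treated]
        est = t.mean() - c.mean()
        se = np.sqrt(t.var(ddof=1) / len(t) + c.var(ddof=1) / len(c))
        return est, se

    def double_list_prevalence(est_a, se_a, est_b, se_b):
        """Average the two estimates of a double-list design; treating the
        lists as uncorrelated (a simplification), the pooled standard error
        shrinks by roughly a factor of sqrt(2)."""
        est = 0.5 * (est_a + est_b)
        se = 0.5 * np.sqrt(se_a**2 + se_b**2)
        return est, se
    ```

    The variance reduction from pooling two lists is what drives the roughly 40% drop in standard errors reported above.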

    Accounting for non-stationarity in epidemiology by embedding time-varying parameters in stochastic models

    The spread of disease through human populations is complex. The characteristics of disease propagation evolve with time as a result of a multitude of environmental and anthropic factors, and this non-stationarity is a key source of that complexity. In the absence of appropriate external data sources, we explore a flexible approach to describing disease propagation correctly, based on stochastic models for the disease dynamics and on diffusion processes for the parameter dynamics. Using a diffusion process has the advantage of not requiring a specific mathematical function for the parameter dynamics. Coupled with particle MCMC, this approach allows us to reconstruct the time evolution of key parameters (the average transmission rate, for instance) and thus to describe an epidemic by capturing the time-varying nature of the different mechanisms involved in its propagation. We first demonstrate the efficiency of this methodology on a toy model, where the parameters and the observation process are known. Applied to real datasets, our methodology is then able, based solely on simple stochastic models, to reconstruct complex epidemics, such as flu or dengue, over long time periods. We thereby demonstrate that time-varying parameters can improve model performance, and we suggest that our methodology can be used as a first step towards a better understanding of a complex epidemic in situations where data are limited and/or uncertain.
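    The modelling idea above, a mechanistic epidemic model whose parameter follows a diffusion rather than a prescribed functional form, can be sketched as an Euler–Maruyama simulation. The SIR structure and all parameter values below are illustrative assumptions of this sketch, not the paper's model:

    ```python
    import numpy as np

    def simulate_sir_diffusing_beta(days, N=1e6, i0=100, beta0=0.4, gamma=0.2,
                                    sigma=0.05, dt=0.1, rng=None):
        """Euler-Maruyama simulation of an SIR model whose log transmission
        rate follows Brownian motion, so no parametric form is imposed on
        how beta(t) evolves."""
        rng = np.random.default_rng(rng)
        steps = int(round(days / dt))
        S, I = N - i0, float(i0)
        log_beta = np.log(beta0)
        out = np.empty((steps, 3))
        for t in range(steps):
            # Random walk on log beta keeps the transmission rate positive
            log_beta += sigma * np.sqrt(dt) * rng.standard_normal()
            beta = np.exp(log_beta)
            new_inf = beta * S * I / N * dt   # new infections this step
            new_rec = gamma * I * dt          # new recoveries this step
            S -= new_inf
            I += new_inf - new_rec
            out[t] = S, I, beta
        return out
    ```

    In the paper's framework the trajectory of beta(t) would then be reconstructed from incidence data with particle MCMC; the sketch above only shows the generative side of such a model.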